10 research outputs found
Tailoring three-dimensional topological codes for biased noise
Tailored topological stabilizer codes in two dimensions have been shown to
exhibit high storage threshold error rates and improved subthreshold
performance under biased Pauli noise. Three-dimensional (3D) topological codes
can allow for several advantages including a transversal implementation of
non-Clifford logical gates, single-shot decoding strategies, parallelized
decoding in the case of fracton codes as well as construction of fractal
lattice codes. Motivated by this, we tailor 3D topological codes for enhanced
storage performance under biased Pauli noise. We present Clifford deformations
of various 3D topological codes, such that they exhibit a threshold error rate
of 50% under infinitely biased Pauli noise. Our examples include the 3D
surface code on the cubic lattice, the 3D surface code on a checkerboard
lattice that lends itself to a subsystem code with a single-shot decoder, the
3D color code, as well as fracton models such as the X-cube model, the
Sierpinski model and the Haah code. We use the belief propagation with ordered
statistics decoder (BP-OSD) to study threshold error rates at finite bias. We
also present a rotated layout for the 3D surface code, which uses roughly half
the number of physical qubits for the same code distance under appropriate
boundary conditions. Imposing coprime periodic dimensions on this rotated
layout leads to logical operators of weight O(n^{2/3}) at infinite bias and a
corresponding subthreshold scaling of the logical failure rate as O(p^{n^{2/3}}),
where n is the number of physical qubits in the code. Even though this
scaling is unstable due to the existence of logical representations with
low-rate Pauli errors, the number of such representations scales only
polynomially for the Clifford-deformed code, leading to an enhanced effective
distance.
Comment: 51 pages, 34 figures
Quantum Machine Learning in High Energy Physics
Machine learning has been used in high energy physics for a long time,
primarily at the analysis level with supervised classification. Quantum
computing was postulated in the early 1980s as a way to perform computations that
would not be tractable with a classical computer. With the advent of noisy
intermediate-scale quantum computing devices, more quantum algorithms are being
developed with the aim of exploiting the capacity of the hardware for machine
learning applications. An interesting question is whether there are ways to
combine quantum machine learning with High Energy Physics. This paper reviews
the first generation of ideas that use quantum machine learning on problems in
high energy physics and provides an outlook on future applications.
Comment: 25 pages, 9 figures, submitted to Machine Learning: Science and
Technology, Focus on Machine Learning for Fundamental Physics collection
Absence of Barren Plateaus in Quantum Convolutional Neural Networks
Quantum neural networks (QNNs) have generated excitement around the
possibility of efficiently analyzing quantum data. But this excitement has been
tempered by the existence of exponentially vanishing gradients, known as barren
plateau landscapes, for many QNN architectures. Recently, Quantum Convolutional
Neural Networks (QCNNs) have been proposed, involving a sequence of
convolutional and pooling layers that reduce the number of qubits while
preserving information about relevant data features. In this work we rigorously
analyze the gradient scaling for the parameters in the QCNN architecture. We
find that the variance of the gradient vanishes no faster than polynomially,
implying that QCNNs do not exhibit barren plateaus. This provides an analytical
guarantee for the trainability of randomly initialized QCNNs, which highlights
QCNNs as being trainable under random initialization unlike many other QNN
architectures. To derive our results we introduce a novel graph-based method to
analyze expectation values over Haar-distributed unitaries, which will likely
be useful in other contexts. Finally, we perform numerical simulations to
verify our analytical results.
Comment: 9 + 20 pages, 7 + 8 figures, 3 tables. Updated to published version
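The central quantity in barren-plateau analyses is the variance of a cost-function gradient over random initializations. The toy numpy sketch below estimates that variance for a small generic variational circuit via the parameter-shift rule; it is an illustration of the quantity being studied, not the paper's QCNN architecture or its graph-based Haar-integral method:

```python
import numpy as np

# Toy numerical estimate of gradient variance under random
# initialization. 2-qubit circuit: a layer of RY rotations, a CZ gate,
# and a second RY layer; cost is <Z> on qubit 0.

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CZ = np.diag([1.0, 1.0, 1.0, -1.0])
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # observable Z on qubit 0

def cost(params):
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CZ @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return psi @ Z0 @ psi

def grad0(params):
    # parameter-shift rule for the gradient w.r.t. the first parameter
    plus, minus = params.copy(), params.copy()
    plus[0] += np.pi / 2; minus[0] -= np.pi / 2
    return 0.5 * (cost(plus) - cost(minus))

rng = np.random.default_rng(0)
grads = [grad0(rng.uniform(0, 2 * np.pi, 4)) for _ in range(500)]
print("Var[dC/dtheta_0] ~", np.var(grads))
```

A barren plateau would show up as this variance shrinking exponentially with the number of qubits; the paper's result is that for QCNNs it shrinks at most polynomially.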
Analytical model of impedance in elliptical beam pipes
Beam instabilities are among the main limitations in building higher-intensity accelerators. A good impedance model for every accelerator is necessary in order to build components that minimize the probability of instabilities caused by the beam-environment interaction, and to understand which components to modify when increasing the intensity. Most accelerator components have their impedance simulated with the finite element method (using software such as CST Studio), but simple components such as circular or flat pipes are modeled analytically, which decreases the computation time and increases the precision compared to their simulated models. Elliptical beam pipes, while being a simple component present in some accelerators, still lack a good analytical model valid over the whole range of velocities and frequencies. In this report, we present a general framework to study the impedance of elliptical pipes analytically. We developed a model for both the longitudinal and transverse impedance, first in the case of a perfectly conducting pipe, then taking resistivity into account. We compared our results in the limiting cases of a round pipe and a flat pipe with existing models for those two geometries, and showed that they are identical for the longitudinal and quadrupolar impedance, but slightly different for the dipolar impedance.
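The round-pipe limiting case mentioned above has a well-known closed form: the thick-wall resistive-wall impedance per unit length for an ultrarelativistic beam. The sketch below evaluates it; the copper resistivity and the 2 cm radius are illustrative assumptions, not values from the report:

```python
import numpy as np

# Classic thick-wall resistive-wall impedance of a round pipe, one of
# the limiting cases the elliptical model is compared against.
# Per unit length: Z_par/L = (1 + 1j) * rho / (2*pi*b*delta),
# with skin depth delta = sqrt(2*rho / (mu0*omega)).

mu0 = 4e-7 * np.pi          # vacuum permeability [H/m]
rho = 1.7e-8                # copper resistivity [Ohm*m] (assumed)
b = 0.02                    # pipe radius [m] (assumed)

def z_long_per_meter(f_hz):
    omega = 2 * np.pi * f_hz
    delta = np.sqrt(2 * rho / (mu0 * omega))   # skin depth [m]
    return (1 + 1j) * rho / (2 * np.pi * b * delta)

print(z_long_per_meter(1e9))  # longitudinal impedance per meter at 1 GHz
```

At 1 GHz the skin depth of copper is about 2 um, giving real and imaginary parts of a few tens of milliohms per meter for this radius.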
Recurrent machines for likelihood-free inference
Likelihood-free inference is concerned with the estimation of the parameters of
a non-differentiable stochastic simulator that best reproduce real observations.
In the absence of a likelihood function, most of the existing inference methods
optimize the simulator parameters through a handcrafted iterative procedure that
tries to make the simulated data more similar to the observations. In this work,
we explore whether meta-learning can be used in the likelihood-free context, for
learning automatically from data an iterative optimization procedure that would
solve likelihood-free inference problems. We design a recurrent inference machine
that learns a sequence of parameter updates leading to good parameter estimates,
without ever specifying any explicit notion of divergence between the simulated
data and the real data distributions. We demonstrate our approach on toy simulators,
showing promising results in terms of both performance and robustness.
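The handcrafted iterative procedure that the paper proposes to replace can be made concrete with a toy example: a black-box simulator, observed data, and a fixed update rule that nudges the parameter estimate toward matching the observations. Everything below is an illustrative sketch; the paper's contribution is to learn such an update with a recurrent network instead of hand-designing it:

```python
import numpy as np

# Toy likelihood-free setting: a stochastic simulator with unknown
# location parameter theta, "real" observations generated at theta = 3,
# and a handcrafted moment-matching update loop.

rng = np.random.default_rng(0)

def simulator(theta, n=200):
    # treated as a black box; non-differentiable in general
    return theta + rng.normal(0.0, 1.0, n)

observed = simulator(3.0)        # real data, true theta = 3

theta = 0.0
for t in range(20):
    simulated = simulator(theta)
    # handcrafted update: move theta toward matching the sample means
    theta = theta + 0.5 * (observed.mean() - simulated.mean())

print("estimate:", theta)        # close to 3
```

A recurrent inference machine replaces the fixed moment-matching step with a learned, state-carrying update, trained across many inference problems so that the sequence of updates converges to good estimates.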
Noise-assisted variational quantum thermalization
Preparing thermal states on a quantum computer can have a variety of
applications, from simulating many-body quantum systems to training machine
learning models. Variational circuits have been proposed for this task on
near-term quantum computers, but several challenges remain, such as finding a
scalable cost-function, avoiding the need of purification, and mitigating noise
effects. We propose a new algorithm for thermal state preparation that tackles
those three challenges by exploiting the noise of quantum circuits. We consider
a variational architecture containing a depolarizing channel after each unitary
layer, with the ability to directly control the level of noise. We derive a
closed-form approximation for the free energy of such a circuit and use it as a
cost function for our variational algorithm. By evaluating our method on a
variety of Hamiltonians and system sizes, we find several systems for which the
thermal state can be approximated with a high fidelity. However, we also show
that the ability for our algorithm to learn the thermal state strongly depends
on the temperature: while a high fidelity can be obtained for high and low
temperatures, we identify a specific range for which the problem becomes more
challenging. We hope that this first study on noise-assisted thermal state
preparation will inspire future research on exploiting noise in variational
algorithms.
Comment: 13 pages, 7 figures. Submitted to Scientific Reports
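The free-energy cost function can be illustrated exactly in the smallest case: a single qubit rotated by RY and passed through a depolarizing channel of controllable strength p. The sketch below computes F = &lt;H&gt; - T*S directly; the paper instead derives a closed-form approximation of F for deeper, multi-qubit circuits, so this is only a conceptual toy:

```python
import numpy as np

# Exact free energy F = <H> - T*S of a noisy single-qubit circuit:
# |0> -- RY(theta) -- depolarizing(p), with Hamiltonian H = Z.

Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def free_energy(theta, p, T):
    rho = np.zeros((2, 2)); rho[0, 0] = 1.0   # |0><0|
    u = ry(theta)
    rho = u @ rho @ u.T
    rho = (1 - p) * rho + p * np.eye(2) / 2   # depolarizing channel
    energy = np.trace(rho @ Z).real
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]              # drop zero eigenvalues
    entropy = -np.sum(evals * np.log(evals))  # von Neumann entropy
    return energy - T * entropy

# Full depolarization gives the maximally mixed state:
# <H> = 0 and S = log 2, so F = -T*log(2).
print(free_energy(np.pi, 1.0, T=1.0))
```

Minimizing F over theta and the noise strength p at a fixed temperature T is the variational principle the algorithm exploits: the depolarizing noise supplies the entropy that a pure-state circuit cannot.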
MD2065: Emittance exchange with linear coupling
In order to better understand the luminosity imbalance between ATLAS and CMS observed in 2016, it was proposed to perform a test whereby the horizontal and vertical emittances are exchanged by crossing the tunes in the presence of linear coupling. The luminosity before and after the exchange could then be compared to see whether the imbalance stems purely from the uneven emittances or whether an additional mechanism is at play. However, due to limited machine availability, only tests at injection could be performed.